Feature Proposal: Scale Buffer #15812
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: elijah-rou. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Hi @elijah-rou. Thanks for your PR. I'm waiting for a knative member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
@elijah-rou: The following test failed, say `/retest` to rerun all failed tests:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Any chance you can elaborate on the problem more?
Many of our clients wish to maintain static spare capacity so that they can handle bursts effectively. These are concurrency=1, GPU workload scenarios. At low traffic volumes they would like n machines to be available to handle load, but they cannot rely on purely proportional scaling through the scaling target at those volumes, and they also do not want to overprovision capacity.

For example: I know my traffic roughly varies in the 10-node range, i.e. at any point, 10 more requests may arrive within the scaling evaluation interval. If I set my buffer to 10, then with 0 in-flight requests I have 0+10 machines total, and with 60 in-flight requests I have 60+10 machines total. The buffer is maintained even as request volume grows. It works in tandem with the autoscaler, which is where it becomes useful. Say I have set …
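A minimal sketch of the arithmetic described above, assuming concurrency=1 so the autoscaler's suggestion tracks in-flight requests; the function and parameter names here are illustrative, not Knative API:

```go
package main

import "fmt"

// desiredReplicas adds a static buffer on top of the autoscaler's
// suggestion, so capacity always exceeds in-flight demand by `buffer`.
func desiredReplicas(suggested, buffer, maxScale int) int {
	desired := suggested + buffer
	if maxScale > 0 && desired > maxScale { // respect max-scale when set
		desired = maxScale
	}
	return desired
}

func main() {
	// With concurrency=1, the suggestion tracks in-flight requests.
	fmt.Println(desiredReplicas(0, 10, 0))  // 0 in-flight  -> 10 replicas
	fmt.Println(desiredReplicas(60, 10, 0)) // 60 in-flight -> 70 replicas
}
```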
Unsure if you're aware, but we have an activation scale knob, so when scaling from zero it will hit a minimum number of replicas. See: https://knative.dev/docs/serving/autoscaling/scale-bounds/#scale-up-minimum. It will eventually scale down based on the request load, though; there was a discussion about that here: #14017
We also have a target burst capacity knob.
I am aware of both of these; neither caters for the use case I am proposing. Scale-up activation only works from zero; it does not help once we have already scaled. Target burst capacity does not help either: we do not want requests to be queued onto targets. These requests can take anything from 5 minutes to 3 hours, and with concurrency of 1 a long-running request will block anything queued behind it, so a request that queues prematurely can get stuck on an instance indefinitely. Target burst capacity in this scenario must be -1 (there is no target burst capacity). This feature predominantly addresses concurrency=1 scenarios for GPU workloads.
Hi @elijah-rou. If I understand correctly, the proposal is to keep a buffer of replicas around in order to guarantee that latency stays acceptable (no queues) and to absorb spikes. Is the one-request-per-replica limit due to a model restriction or to resources?

In any case, buffer-based capacity management on top of autoscaling is already used elsewhere, where capacity is kept above demand at all times. I think it would make more sense to have a buffer percentage rather than a static value, since a percentage would not depend on traffic patterns; I understand, though, that replicas are attached to GPUs and you want to control the absolute number somehow. It would also be preferable to be able to adjust the buffer percentage based on volatility (changes in variance) to cover changing spike patterns, whereas here things are static, aiming for the buffer that covers one specific workload's demands. I guess the goal with this proposal is to adjust the buffer externally, via some tooling, per service.

To summarize: you want to roughly follow the demand curve, and you use (KPA + a static buffer) to implement it.
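For comparison, a sketch of the percentage-based alternative floated above (hypothetical names; this is not an existing Knative knob). A relative buffer keeps headroom proportional to demand instead of constant:

```go
package main

import (
	"fmt"
	"math"
)

// desiredWithPercentBuffer grows and shrinks the headroom with the
// autoscaler's suggestion, rather than keeping a fixed pod count spare.
func desiredWithPercentBuffer(suggested int, bufferPct float64) int {
	return int(math.Ceil(float64(suggested) * (1 + bufferPct)))
}

func main() {
	fmt.Println(desiredWithPercentBuffer(10, 0.2)) // 10 in-flight -> 12 replicas
	fmt.Println(desiredWithPercentBuffer(60, 0.2)) // 60 in-flight -> 72 replicas
}
```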
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
I am impartial as to whether we merge this; it entirely depends on whether you feel users can make use of it or whether there is indeed something more suitable (I would be happy with some sort of dynamic percentage based on both volatility and volume, as you mentioned, for example). I am happy to use this as a starting point to tackle #15068 instead, and I can aim for the 1.19 release (since 1.18 is around the corner).
I think we shouldn't merge this, and instead discuss (in the issue) what would have broader appeal. I think supporting just … I also don't want to make serving overly complex; the goal is to simplify things for app developers, so there's a balance to be had.
Proposal: Introduce a "Scale Buffer", which ensures that `n` extra service instances are always running.

We have had a number of customers who desire control over the number of instances available to serve requests. This is particularly important with GPU instances, which typically can only process a single request at a time. Specifically, they have voiced the need for a feature that ensures `n` instances are always available to serve requests, above what the autoscaler suggests. This guarantees that even at low volumes there is enough capacity.

This proposal introduces a `scale-buffer` annotation on the service manifest, which statically adds `n` to the desired pod count above the autoscaler suggestion for a KPA. Even though the logic is pretty simple, we currently have this running in our Knative fork and it is working well. I figured it could be useful to others in the Knative community who are also running low-concurrency/no-concurrency workloads. Happy to amend it as needed if the maintainers wish to accept it as a proposal.
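For illustration, a minimal sketch of how such a buffer could be applied after the KPA produces its desired pod count. The fully-qualified annotation key and the helper name are assumptions (the proposal above only names the annotation `scale-buffer`); the actual PR may wire this in differently:

```go
package scaling

import "strconv"

// Hypothetical fully-qualified key; the proposal only names "scale-buffer".
const scaleBufferAnnotation = "autoscaling.knative.dev/scale-buffer"

// applyScaleBuffer statically adds the configured buffer to the
// autoscaler's suggested pod count, ignoring unset or malformed values.
func applyScaleBuffer(desired int32, annotations map[string]string) int32 {
	raw, ok := annotations[scaleBufferAnnotation]
	if !ok {
		return desired
	}
	buffer, err := strconv.ParseInt(raw, 10, 32)
	if err != nil || buffer < 0 {
		return desired
	}
	return desired + int32(buffer)
}
```

With a buffer of 10, a revision that would otherwise scale to zero keeps 10 pods warm, matching the 0+10 / 60+10 behaviour described earlier in the thread.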